
    Inequality, Inequity Aversion, and the Provision of Public Goods

    We investigate the effects of wealth inequality on the incentives to contribute to a public good when agents are inequity averse and may differ in ability. We show that equality may reduce public good provision below the levels generated by purely selfish agents, whereas introducing inequality motivates more productive agents to exert higher effort and helps the group coordinate on equilibria with less free-riding. As a result, less able agents may benefit from initially disadvantageous inequality. Moreover, the more inequity averse the agents, the more inequality should be imposed even by an egalitarian social planner.

    Keywords: public goods, inequality, inequity aversion, social welfare, voluntary provision, income distribution, heterogeneity
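    For reference, a standard formalization of inequity aversion in this literature is the Fehr-Schmidt utility; the abstract does not name the paper's exact specification, so take this as background rather than the authors' model. Agent $i$'s utility from payoff vector $x$ among $n$ agents is

        U_i(x) = x_i - \frac{\alpha_i}{n-1} \sum_{j \neq i} \max(x_j - x_i, 0) - \frac{\beta_i}{n-1} \sum_{j \neq i} \max(x_i - x_j, 0),

    where $\alpha_i \geq \beta_i$ measures envy toward better-off agents and $0 \leq \beta_i < 1$ measures guilt toward worse-off ones.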

    A deep learning framework for quality assessment and restoration in video endoscopy

    Endoscopy is a routine imaging technique used for both diagnosis and minimally invasive surgical treatment. Artifacts such as motion blur, bubbles, specular reflections, floating objects, and pixel saturation impede the visual interpretation and automated analysis of endoscopy videos. Given the widespread use of endoscopy across clinical applications, we contend that robust and reliable identification of such artifacts and automated restoration of corrupted video frames is a fundamental medical imaging problem. Existing state-of-the-art methods deal only with the detection and restoration of selected artifacts, yet endoscopy videos typically contain numerous artifacts, which motivates a comprehensive solution. We propose a fully automatic framework that can: 1) detect and classify six different primary artifacts, 2) provide a quality score for each frame, and 3) restore mildly corrupted frames. To detect the different artifacts, our framework exploits a fast multi-scale, single-stage convolutional neural network detector. We introduce a quality metric to assess frame quality and predict image restoration success. Generative adversarial networks with carefully chosen regularization are finally used to restore corrupted frames. Our detector yields the highest mean average precision (mAP at a 5% threshold) of 49.0 and the lowest computational time of 88 ms, allowing for accurate real-time processing. Our restoration models for blind deblurring, saturation correction, and inpainting demonstrate significant improvements over previous methods. On a set of 10 test videos, we show that our approach preserves an average of 68.7% of frames, 25% more than are retained from the raw videos.

    Comment: 14 pages
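    The abstract does not specify the quality metric, so as a purely illustrative sketch: a frame-level score could aggregate per-artifact detections by the fraction of the frame they cover, weighted by detector confidence. All names and weights below are hypothetical assumptions, not the authors' method:

        def frame_quality_score(detections, frame_area, weights=None):
            """Hypothetical frame-quality score in [0, 1]; 1.0 means no detected
            artifacts. detections: list of (artifact_class, confidence, bbox_area)."""
            weights = weights or {}
            penalty = 0.0
            for cls, conf, area in detections:
                w = weights.get(cls, 1.0)                  # per-class severity (assumed)
                penalty += w * conf * (area / frame_area)  # confidence-weighted coverage
            return max(0.0, 1.0 - penalty)

        # usage: one motion-blur box and one specularity box on a 1080p frame
        dets = [("motion_blur", 0.9, 120_000), ("specularity", 0.7, 15_000)]
        print(frame_quality_score(dets, frame_area=1920 * 1080))

    A restoration pipeline could then restore frames whose score falls in a middle band (mildly corrupted) and drop frames below it.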

    On the Complexity of Nucleolus Computation for Bipartite b-Matching Games

    We explore the complexity of nucleolus computation in b-matching games on bipartite graphs. We show that computing the nucleolus of a simple b-matching game is NP-hard even on bipartite graphs of maximum degree 7. We complement this with partial positive results in the special case where b-values are bounded by 2. In particular, we describe an efficient algorithm when a constant number of vertices satisfy b(v) = 2, as well as an efficient algorithm for computing the non-simple b-matching nucleolus when b = 2.
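    Background for readers (standard cooperative game theory, not specific to this paper): in a cooperative game $(N, v)$ with payoff allocation $x$, the excess of a coalition $S \subseteq N$ is

        e(S, x) = v(S) - \sum_{i \in S} x_i,

    and the nucleolus is the allocation that lexicographically minimizes the vector of all coalition excesses sorted in non-increasing order. It is defined over exponentially many coalitions, which is why efficient computation is nontrivial and hardness results like the one above are meaningful.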

    WHAT YOU SEE IS WHAT YOU GET (WYSIWYG) STAGE EDITOR FOR USERS WHO SHARE APPLICATIONS

    Techniques are provided herein for a What You See Is What You Get (WYSIWYG) stage editor for the presenter in a meeting. These techniques remove the need for two people (one as a presenter to share content and another to take on the producer role as stage editor): the presenter can edit the stage directly. These techniques also resolve the infinity effect caused by screen-sharing when only one display is available.

    Replicability in Reinforcement Learning

    We initiate the mathematical study of replicability as an algorithmic property in the context of reinforcement learning (RL). We focus on the fundamental setting of discounted tabular MDPs with access to a generative model. Inspired by Impagliazzo et al. [2022], we say that an RL algorithm is replicable if, with high probability, it outputs the exact same policy after two executions on i.i.d. samples drawn from the generator when its internal randomness is the same. We first provide an efficient $\rho$-replicable algorithm for $(\varepsilon, \delta)$-optimal policy estimation with sample and time complexity $\widetilde{O}\left(\frac{N^3 \cdot \log(1/\delta)}{(1-\gamma)^5 \cdot \varepsilon^2 \cdot \rho^2}\right)$, where $N$ is the number of state-action pairs. Next, for the subclass of deterministic algorithms, we provide a lower bound of order $\Omega\left(\frac{N^3}{(1-\gamma)^3 \cdot \varepsilon^2 \cdot \rho^2}\right)$. Then, we study a relaxed version of replicability proposed by Kalavasis et al. [2023] called TV indistinguishability. We design a computationally efficient TV-indistinguishable algorithm for policy estimation whose sample complexity is $\widetilde{O}\left(\frac{N^2 \cdot \log(1/\delta)}{(1-\gamma)^5 \cdot \varepsilon^2 \cdot \rho^2}\right)$. At the cost of $\exp(N)$ running time, we transform these TV-indistinguishable algorithms into $\rho$-replicable ones without increasing their sample complexity. Finally, we introduce the notion of approximate replicability, where we only require that the two output policies are close under an appropriate statistical divergence (e.g., Rényi), and show an improved sample complexity of $\widetilde{O}\left(\frac{N \cdot \log(1/\delta)}{(1-\gamma)^5 \cdot \varepsilon^2 \cdot \rho^2}\right)$.

    Comment: to be published in NeurIPS 202
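    As a toy illustration of the replicability definition above (a minimal sketch, not the paper's algorithm; all names are hypothetical): the standard trick is to round empirical statistics to a randomly shifted grid, with the shift drawn from the shared internal randomness, so that two executions on i.i.d. data return the same output with high probability.

        import random

        def replicable_best_action(samples, seed, n_actions=3, grid=0.1):
            """Toy 'RL algorithm' for a one-state MDP: return the action with the
            best empirical mean reward, after snapping the means to a grid whose
            offset comes from the shared internal randomness (seed)."""
            rng = random.Random(seed)            # shared across both executions
            offset = rng.uniform(0.0, grid)      # random shift of the rounding grid
            means = []
            for a in range(n_actions):
                rewards = [r for act, r in samples if act == a]
                means.append(sum(rewards) / len(rewards) if rewards else 0.0)
            snapped = [grid * round((m + offset) / grid) for m in means]
            return max(range(n_actions), key=lambda a: snapped[a])

        def draw_samples(n=5000, probs=(0.40, 0.70, 0.55)):
            """i.i.d. samples from the generative model: (action, Bernoulli reward)."""
            return [(a, 1.0 if random.random() < probs[a] else 0.0)
                    for a in (random.randrange(len(probs)) for _ in range(n))]

        # two executions on independent samples, same internal seed -> same policy w.h.p.
        print(replicable_best_action(draw_samples(), seed=7) ==
              replicable_best_action(draw_samples(), seed=7))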

    Scrape, Cut, Paste and Learn: Automated Dataset Generation Applied to Parcel Logistics

    State-of-the-art approaches in computer vision rely heavily on sufficiently large training datasets. For real-world applications, obtaining such a dataset is usually a tedious task. In this paper, we present a fully automated pipeline that generates a synthetic dataset for instance segmentation in four steps. In contrast to existing work, our pipeline covers every step from data acquisition to the final dataset. We first scrape images of the objects of interest from popular image search engines; since we rely only on text-based queries, the resulting data comprises a wide variety of images. Hence, image selection is necessary as a second step. This approach of image scraping and selection relaxes the need for a real-world, domain-specific dataset that must be either publicly available or created for the purpose. We employ an object-agnostic background removal model and compare three different methods for image selection: object-agnostic pre-processing, manual image selection, and CNN-based image selection. In the third step, we generate random arrangements of the objects of interest and distractors on arbitrary backgrounds. Finally, the images are composed by pasting the objects using four different blending methods. We present a case study of our dataset generation approach for parcel segmentation. For the evaluation, we created a dataset of parcel photos that were annotated automatically. We find that (1) our dataset generation pipeline allows a successful transfer to real test images (Mask AP 86.2), (2) a very accurate image selection process, in contrast to human intuition, is not crucial, and a broader category definition can help to bridge the domain gap, and (3) the use of blending methods is beneficial compared to simple copy-and-paste. We make our full code for scraping, image composition, and training publicly available at https://a-nau.github.io/parcel2d.

    Comment: Accepted at ICMLA 202
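    A minimal sketch of the cut-and-paste composition step, assuming RGBA object cut-outs with an alpha channel from background removal; the two blending variants here (naive paste and Gaussian-blurred mask) are generic illustrations, not the paper's exact four methods, and the file paths are placeholders:

        import random
        from PIL import Image, ImageFilter

        def paste_object(background, cutout, blur_radius=0):
            """Paste an RGBA cut-out onto an RGB background at a random position.
            blur_radius=0 gives naive copy-paste; >0 softens the alpha mask edges
            (Gaussian blending), which reduces hard pasting artifacts."""
            bg = background.convert("RGB")
            x = random.randint(0, max(0, bg.width - cutout.width))
            y = random.randint(0, max(0, bg.height - cutout.height))
            mask = cutout.getchannel("A")
            if blur_radius > 0:
                mask = mask.filter(ImageFilter.GaussianBlur(blur_radius))
            bg.paste(cutout.convert("RGB"), (x, y), mask)
            return bg

        # usage: compose one synthetic training image
        bg = Image.open("background.jpg")
        parcel = Image.open("parcel_cutout.png")  # RGBA cut-out after background removal
        paste_object(bg, parcel, blur_radius=2).save("synthetic_sample.jpg")

    Repeating this with several objects and distractors per background, while recording each paste position and mask, yields instance-segmentation labels for free.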

    Replicable Clustering

    We design replicable algorithms in the context of statistical clustering under the recently introduced notion of replicability from Impagliazzo et al. [2022]. According to this definition, a clustering algorithm is replicable if, with high probability, its output induces the exact same partition of the sample space after two executions on different inputs drawn from the same distribution, when its internal randomness is shared across the executions. We propose such algorithms for the statistical $k$-medians, statistical $k$-means, and statistical $k$-centers problems by utilizing approximation routines for their combinatorial counterparts in a black-box manner. In particular, we demonstrate a replicable $O(1)$-approximation algorithm for statistical Euclidean $k$-medians ($k$-means) with $\operatorname{poly}(d)$ sample complexity. We also describe an $O(1)$-approximation algorithm with an additional $O(1)$-additive error for statistical Euclidean $k$-centers, albeit with $\exp(d)$ sample complexity. In addition, we provide experiments on synthetic distributions in 2D, using the $k$-means++ implementation from sklearn as a black box, that validate our theoretical results.

    Comment: to be published in NeurIPS 202
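    To make the definition concrete, here is a small empirical check using the sklearn k-means++ black box mentioned in the abstract: fit on two i.i.d. samples with shared internal randomness and compare the partitions of the sample space induced by the resulting centers. Plain k-means++ only comes close to agreement; the paper's algorithms add machinery to guarantee it, so this sketch is illustrative rather than the authors' algorithm:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng()

        def draw_sample(n=500):
            """i.i.d. draws from a fixed 2D Gaussian mixture (the 'distribution')."""
            means = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
            return means[rng.integers(0, 3, size=n)] + rng.normal(size=(n, 2))

        def fit_centers(sample, shared_seed, k=3):
            """k-means++ with internal randomness fixed by the shared seed."""
            km = KMeans(n_clusters=k, init="k-means++", n_init=10,
                        random_state=shared_seed)
            return km.fit(sample).cluster_centers_

        # partition of a grid over the sample space induced by each run's centers
        xs = np.linspace(-3, 8, 40)
        grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
        c1 = fit_centers(draw_sample(), shared_seed=0)
        c2 = fit_centers(draw_sample(), shared_seed=0)
        p1 = np.linalg.norm(grid[:, None] - c1[None], axis=2).argmin(axis=1)
        p2 = np.linalg.norm(grid[:, None] - c2[None], axis=2).argmin(axis=1)
        print(f"partition agreement (ARI, label-permutation invariant): "
              f"{adjusted_rand_score(p1, p2):.2f}")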

    Local Detection of Topical Entities Using Machine Learning

    Computer-implemented systems and methods are provided for determining topics of displayed content while maintaining user data privacy and security. Entity identification and topic determination models may be stored on a user computing device, so that the device itself performs topic detection on the content presently displayed and no user data leaves the device. Once one or more topics are determined from the content, features within the user computing device may be enabled or tailored to the user based on the content being displayed.
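    A minimal on-device sketch of the idea, with a hypothetical keyword lookup standing in for the stored entity and topic models (the disclosure itself describes learned models; all names and mappings below are invented for illustration):

        # Runs entirely on-device: displayed text in, feature toggles out.
        # No network calls, so the displayed content never leaves the device.
        TOPIC_KEYWORDS = {
            "travel": {"flight", "hotel", "itinerary", "boarding"},
            "finance": {"invoice", "payment", "balance", "statement"},
        }
        FEATURES_BY_TOPIC = {
            "travel": ["show_currency_converter", "enable_offline_maps"],
            "finance": ["enable_secure_screenshot_block"],
        }

        def detect_topics(displayed_text):
            """Stand-in for the stored topic determination model."""
            words = set(displayed_text.lower().split())
            return [t for t, kws in TOPIC_KEYWORDS.items() if words & kws]

        def features_to_enable(displayed_text):
            """Map detected topics to device features to enable or tailor."""
            feats = []
            for topic in detect_topics(displayed_text):
                feats.extend(FEATURES_BY_TOPIC.get(topic, []))
            return feats

        print(features_to_enable("Your flight itinerary and hotel booking"))
        # ['show_currency_converter', 'enable_offline_maps']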